28 research outputs found

    Algorithmic ramifications of prefetching in memory hierarchy

    External memory models, most notably the I-O model [3], capture the effects of the memory hierarchy and aid in algorithm design. More than a decade of architectural advancements have led to new features not captured in the I-O model, most notably the prefetching capability. We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O models and show how to design algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed, so intelligent use of prefetching can considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized Random Access Machine under reasonable assumptions. Our work also explains the significantly superior performance of I-O-efficient algorithms in systems that support prefetching compared to ones that do not.
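    The latency-hiding argument above can be sketched with a back-of-envelope cost comparison: without prefetching every block read pays full memory latency, while pipelined prefetching overlaps latency with transfers so reads stream at close to peak bandwidth. All parameter values below (latency, block size, bandwidth) are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope comparison: latency-bound blocked reads vs.
# pipelined (prefetched) reads streaming at peak bandwidth.

def blocked_read_time(n_blocks, latency, block_bytes, bandwidth):
    """Each read pays the full memory latency before its transfer starts."""
    return n_blocks * (latency + block_bytes / bandwidth)

def prefetched_read_time(n_blocks, latency, block_bytes, bandwidth):
    """Prefetching overlaps latency with transfers: only the first
    access stalls; the rest stream at peak bandwidth."""
    return latency + n_blocks * (block_bytes / bandwidth)

N = 1_000_000   # blocks read (assumed)
LAT = 100e-9    # 100 ns memory latency (assumed)
BLK = 64        # 64-byte block / cache line (assumed)
BW = 10e9       # 10 GB/s sustained bandwidth (assumed)

t_plain = blocked_read_time(N, LAT, BLK, BW)
t_pref = prefetched_read_time(N, LAT, BLK, BW)
print(f"no prefetch: {t_plain:.4f} s, prefetch: {t_pref:.4f} s, "
      f"speedup: {t_plain / t_pref:.1f}x")
```

    With these assumed numbers the speedup is roughly the ratio of latency to per-block transfer time, which is why bandwidth-bound algorithms can approach RAM-like running times when latency is fully hidden.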

    Shredder: GPU-Accelerated Incremental Storage and Computation

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation, and evaluation of Shredder, a high-performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-based chunking in a cost-effective manner. Unlike previous uses of GPUs, which have focused on applications where computation costs are dominant, Shredder is designed to operate in both compute- and data-intensive environments. To allow this, Shredder provides several novel optimizations aimed at reducing the cost of transferring data between the host (CPU) and GPU, fully utilizing the multicore architecture at the host, and reducing GPU memory access latencies. With our optimizations, Shredder achieves a speedup of over 5X in chunking bandwidth compared to our optimized parallel implementation without a GPU on the same host system. Furthermore, we present two real-world applications of Shredder: an extension to HDFS, which serves as a basis for incremental MapReduce computations, and an incremental cloud backup system. In both contexts, Shredder detects redundancies in the input data across successive runs, leading to significant savings in storage, computation, and end-to-end completion times.
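    The content-based chunking that Shredder accelerates can be illustrated with a minimal CPU-side sketch using a polynomial rolling hash: a boundary is declared wherever the low bits of a windowed hash hit a fixed pattern, so identical content yields identical chunk boundaries regardless of its offset, which is what enables deduplication across runs. The window size, boundary mask, minimum chunk size, and hash function here are illustrative assumptions, not Shredder's actual GPU implementation:

```python
# Minimal CPU sketch of content-defined chunking with a polynomial
# rolling hash over a fixed-size byte window.

def chunk_boundaries(data: bytes, window=16, mask=(1 << 11) - 1, min_size=64):
    """Return end offsets of content-defined chunks covering `data`."""
    prime = 31
    pop = pow(prime, window, 1 << 32)  # weight of the byte leaving the window
    h, start, boundaries = 0, 0, []
    for i, b in enumerate(data):
        h = (h * prime + b) & 0xFFFFFFFF
        if i >= window:
            # Remove the contribution of the byte that slid out of the window.
            h = (h - data[i - window] * pop) & 0xFFFFFFFF
        # Boundary when the hash's low bits match a fixed pattern; the
        # min_size guard avoids pathologically small chunks.
        if i + 1 - start >= min_size and (h & mask) == mask:
            boundaries.append(i + 1)
            start = i + 1
    if start < len(data):
        boundaries.append(len(data))  # final (possibly short) chunk
    return boundaries
```

    Because the hash depends only on the last `window` bytes, inserting data near the start of a file shifts only the chunks it touches, leaving later chunk contents (and hence their deduplication fingerprints) unchanged.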

    Study of Covid-19 Pandemic's Effect on the Mental Health of Migrant Workers

    The COVID-19 pandemic has significantly changed social and professional settings in a number of ways. Social distancing laws, mandatory lockdowns, isolation periods, fear of getting sick, suspension of productive activities, loss of pay, and anxiety about the future all affect residents' and employees' mental health. Pre-existing psychological morbidities, a high prevalence of pre-existing physical health conditions such as respiratory disease, tuberculosis, and HIV infection, adverse psychosocial factors such as lack of family support and of a caretaker throughout the crisis, and a limited ability to follow pandemic rules and guidelines make migrant workers especially vulnerable. Anxiety, depression, PTSD, and sleep disorders are among the mental health issues linked to the crisis that are more likely to affect healthcare workers, particularly those on the front lines, migrant workers, and workers who interact with the general public. This occupational group is made even more prone to psychiatric illness by the blow of economic constraints brought on by labor shortages, the lack or suspension of basic occupational safety and health regulations with their associated hazards, and other factors. This review establishes the framework for a deeper understanding of workers' psychological situation during the pandemic, integrating individual and social perspectives and offering insight into practicable individual, social, and occupational approaches to the current "psychological pandemic."

    Combating I-O bottleneck using prefetching: model, algorithms, and ramifications

    Multiple memory models have been proposed to capture the effects of the memory hierarchy, culminating in the I-O model of Aggarwal and Vitter (Commun. ACM 31(9):1116-1127, 1988). More than a decade of architectural advancements have led to new features that are not captured in the I-O model, most notably the prefetching capability. We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O models and show how to design optimal algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed, so intelligent use of prefetching can considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized random access machine under reasonable assumptions. Our work also explains more precisely the significantly superior performance of I-O-efficient algorithms in systems that support prefetching compared to ones that do not.